This repository has been archived by the owner on Jan 6, 2023. It is now read-only.

Update actions/download-artifact action to v3 #3

Open · renovate[bot] wants to merge 1 commit into main

Conversation

renovate[bot] (Contributor) commented Jan 3, 2023

Mend Renovate

This PR contains the following updates:

Package                   | Type   | Update | Change
actions/download-artifact | action | major  | v2 -> v3

Release Notes

actions/download-artifact v3 (Compare Source)


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Mend Renovate. View repository job log here.

github-actions bot commented Jan 3, 2023

Kubeval: 📖 success

Show results
WARN - Set to ignore missing schemas
PASS - ./manifests/hcloud_secret.yaml contains a valid Secret (kube-system.cloud-provider-credentials)
WARN - Set to ignore missing schemas
PASS - ./manifests/helm_secret.yaml contains a valid Secret (argocd.helm-secrets)
WARN - Set to ignore missing schemas
WARN - ./manifests/machinedeployment.yaml containing a MachineDeployment (kube-system.test-cpx31) was not validated against a schema

Author: @renovate[bot]
Action: pull_request

github-actions bot commented Jan 3, 2023

Terraform Plan: 📖 success

Show Plan
Running plan in the remote backend. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.

Preparing the remote plan...

To view this run in a browser, visit:
https://app.terraform.io/app/cedi/k8s-cedi-dev/runs/run-WburR8JWhVJZnzpu

Waiting for the plan to start...

Terraform v1.2.3
on linux_amd64
Initializing plugins and modules...
hcloud_load_balancer.load_balancer: Refreshing state... [id=1009799]
hcloud_placement_group.control_plane_placement: Refreshing state... [id=110047]
hcloud_firewall.api-fw: Refreshing state... [id=659501]
hcloud_load_balancer_service.api_service: Refreshing state... [id=1009799__6443]
cloudflare_record.dns_api_v4: Refreshing state... [id=d7fcd07426121f27b511cfbd45e35a83]
cloudflare_record.dns_api_v6: Refreshing state... [id=747d6290cf949edc4ec5ca47c4e11c3c]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # cloudflare_record.dns_v4_api1 will be created
  + resource "cloudflare_record" "dns_v4_api1" {
      + allow_overwrite = false
      + created_on      = (known after apply)
      + hostname        = (known after apply)
      + id              = (known after apply)
      + metadata        = (known after apply)
      + modified_on     = (known after apply)
      + name            = "api1"
      + proxiable       = (known after apply)
      + proxied         = false
      + ttl             = 1
      + type            = "A"
      + value           = (known after apply)
      + zone_id         = "69e6bcd8df08222aae8f102910c5df9d"
    }

  # cloudflare_record.dns_v4_api2 will be created
  + resource "cloudflare_record" "dns_v4_api2" {
      + allow_overwrite = false
      + created_on      = (known after apply)
      + hostname        = (known after apply)
      + id              = (known after apply)
      + metadata        = (known after apply)
      + modified_on     = (known after apply)
      + name            = "api2"
      + proxiable       = (known after apply)
      + proxied         = false
      + ttl             = 1
      + type            = "A"
      + value           = (known after apply)
      + zone_id         = "69e6bcd8df08222aae8f102910c5df9d"
    }

  # cloudflare_record.dns_v4_api3 will be created
  + resource "cloudflare_record" "dns_v4_api3" {
      + allow_overwrite = false
      + created_on      = (known after apply)
      + hostname        = (known after apply)
      + id              = (known after apply)
      + metadata        = (known after apply)
      + modified_on     = (known after apply)
      + name            = "api3"
      + proxiable       = (known after apply)
      + proxied         = false
      + ttl             = 1
      + type            = "A"
      + value           = (known after apply)
      + zone_id         = "69e6bcd8df08222aae8f102910c5df9d"
    }

  # cloudflare_record.dns_v6_api1 will be created
  + resource "cloudflare_record" "dns_v6_api1" {
      + allow_overwrite = false
      + created_on      = (known after apply)
      + hostname        = (known after apply)
      + id              = (known after apply)
      + metadata        = (known after apply)
      + modified_on     = (known after apply)
      + name            = "api1"
      + proxiable       = (known after apply)
      + proxied         = false
      + ttl             = 1
      + type            = "AAAA"
      + value           = (known after apply)
      + zone_id         = "69e6bcd8df08222aae8f102910c5df9d"
    }

  # cloudflare_record.dns_v6_api2 will be created
  + resource "cloudflare_record" "dns_v6_api2" {
      + allow_overwrite = false
      + created_on      = (known after apply)
      + hostname        = (known after apply)
      + id              = (known after apply)
      + metadata        = (known after apply)
      + modified_on     = (known after apply)
      + name            = "api2"
      + proxiable       = (known after apply)
      + proxied         = false
      + ttl             = 1
      + type            = "AAAA"
      + value           = (known after apply)
      + zone_id         = "69e6bcd8df08222aae8f102910c5df9d"
    }

  # cloudflare_record.dns_v6_api3 will be created
  + resource "cloudflare_record" "dns_v6_api3" {
      + allow_overwrite = false
      + created_on      = (known after apply)
      + hostname        = (known after apply)
      + id              = (known after apply)
      + metadata        = (known after apply)
      + modified_on     = (known after apply)
      + name            = "api3"
      + proxiable       = (known after apply)
      + proxied         = false
      + ttl             = 1
      + type            = "AAAA"
      + value           = (known after apply)
      + zone_id         = "69e6bcd8df08222aae8f102910c5df9d"
    }
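
For reference, a minimal sketch of the kind of source HCL that could produce one of the six DNS record entries above. The attribute values are copied from the plan; the value reference to a server address is an assumption, since the plan only shows it as (known after apply):

  # Hypothetical sketch, not the repository's actual code.
  resource "cloudflare_record" "dns_v4_api1" {
    zone_id = "69e6bcd8df08222aae8f102910c5df9d"
    name    = "api1"
    type    = "A"
    ttl     = 1      # a TTL of 1 means "automatic" in Cloudflare
    proxied = false
    value   = hcloud_server.control_plane1.ipv4_address  # assumed source of the value
  }

The AAAA variants would differ only in the type and in referencing the server's ipv6_address.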

  # hcloud_load_balancer_network.load_balancer will be created
  + resource "hcloud_load_balancer_network" "load_balancer" {
      + enable_public_interface = true
      + id                      = (known after apply)
      + ip                      = (known after apply)
      + load_balancer_id        = 1009799
      + subnet_id               = (known after apply)
    }

  # hcloud_load_balancer_target.lb_target_cp1 will be created
  + resource "hcloud_load_balancer_target" "lb_target_cp1" {
      + id               = (known after apply)
      + load_balancer_id = 1009799
      + server_id        = (known after apply)
      + type             = "server"
      + use_private_ip   = true
    }

  # hcloud_load_balancer_target.lb_target_cp2 will be created
  + resource "hcloud_load_balancer_target" "lb_target_cp2" {
      + id               = (known after apply)
      + load_balancer_id = 1009799
      + server_id        = (known after apply)
      + type             = "server"
      + use_private_ip   = true
    }

  # hcloud_load_balancer_target.lb_target_cp3 will be created
  + resource "hcloud_load_balancer_target" "lb_target_cp3" {
      + id               = (known after apply)
      + load_balancer_id = 1009799
      + server_id        = (known after apply)
      + type             = "server"
      + use_private_ip   = true
    }
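
A plausible sketch of one of these three target definitions, with the resource references assumed; the plan resolves load_balancer_id to the already-existing load balancer 1009799 refreshed earlier:

  # Hypothetical sketch; the references are assumptions.
  resource "hcloud_load_balancer_target" "lb_target_cp1" {
    type             = "server"
    load_balancer_id = hcloud_load_balancer.load_balancer.id  # 1009799 in the refreshed state
    server_id        = hcloud_server.control_plane1.id
    use_private_ip   = true
  }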

  # hcloud_network.net will be created
  + resource "hcloud_network" "net" {
      + delete_protection = false
      + id                = (known after apply)
      + ip_range          = "192.168.0.0/16"
      + name              = "cedi-dev"
    }

  # hcloud_network_subnet.kubeone will be created
  + resource "hcloud_network_subnet" "kubeone" {
      + gateway      = (known after apply)
      + id           = (known after apply)
      + ip_range     = "192.168.0.0/16"
      + network_id   = (known after apply)
      + network_zone = "eu-central"
      + type         = "server"
    }
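
These two resources typically look like the following in source form; the values match the plan, and the network_id reference is an assumption:

  # Hypothetical sketch, values copied from the plan above.
  resource "hcloud_network" "net" {
    name     = "cedi-dev"
    ip_range = "192.168.0.0/16"
  }

  resource "hcloud_network_subnet" "kubeone" {
    network_id   = hcloud_network.net.id  # the plan shows this as (known after apply)
    type         = "server"
    network_zone = "eu-central"
    ip_range     = "192.168.0.0/16"
  }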

  # hcloud_rdns.rdns_api1 will be created
  + resource "hcloud_rdns" "rdns_api1" {
      + dns_ptr    = "api1.cedi.dev"
      + id         = (known after apply)
      + ip_address = (known after apply)
      + server_id  = (known after apply)
    }

  # hcloud_rdns.rdns_api2 will be created
  + resource "hcloud_rdns" "rdns_api2" {
      + dns_ptr    = "api2.cedi.dev"
      + id         = (known after apply)
      + ip_address = (known after apply)
      + server_id  = (known after apply)
    }

  # hcloud_rdns.rdns_api3 will be created
  + resource "hcloud_rdns" "rdns_api3" {
      + dns_ptr    = "api3.cedi.dev"
      + id         = (known after apply)
      + ip_address = (known after apply)
      + server_id  = (known after apply)
    }

  # hcloud_server.control_plane1 will be created
  + resource "hcloud_server" "control_plane1" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = (known after apply)
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "ubuntu-20.04"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "cluster" = "cedi-dev"
          + "role"    = "api"
        }
      + location                   = "nbg1"
      + name                       = "api1.cedi.dev"
      + placement_group_id         = 110047
      + rebuild_protection         = false
      + server_type                = "cx21"
      + ssh_keys                   = [
          + "cedi@ivy",
          + "cedi@mae",
          + "ghaction",
        ]
      + status                     = (known after apply)
    }

  # hcloud_server.control_plane2 will be created
  + resource "hcloud_server" "control_plane2" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = (known after apply)
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "ubuntu-20.04"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "cluster" = "cedi-dev"
          + "role"    = "api"
        }
      + location                   = "nbg1"
      + name                       = "api2.cedi.dev"
      + placement_group_id         = 110047
      + rebuild_protection         = false
      + server_type                = "cx21"
      + ssh_keys                   = [
          + "cedi@ivy",
          + "cedi@mae",
          + "ghaction",
        ]
      + status                     = (known after apply)
    }

  # hcloud_server.control_plane3 will be created
  + resource "hcloud_server" "control_plane3" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = (known after apply)
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "ubuntu-20.04"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "cluster" = "cedi-dev"
          + "role"    = "api"
        }
      + location                   = "nbg1"
      + name                       = "api3.cedi.dev"
      + placement_group_id         = 110047
      + rebuild_protection         = false
      + server_type                = "cx21"
      + ssh_keys                   = [
          + "cedi@ivy",
          + "cedi@mae",
          + "ghaction",
        ]
      + status                     = (known after apply)
    }
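
The three control plane servers differ only in name; a sketch of one definition follows, with the placement group reference assumed (the plan resolves it to the refreshed group 110047):

  # Hypothetical sketch of one of the three near-identical server definitions.
  resource "hcloud_server" "control_plane1" {
    name               = "api1.cedi.dev"
    image              = "ubuntu-20.04"
    server_type        = "cx21"
    location           = "nbg1"
    placement_group_id = hcloud_placement_group.control_plane_placement.id  # 110047 per the refreshed state
    ssh_keys           = ["cedi@ivy", "cedi@mae", "ghaction"]
    labels = {
      cluster = "cedi-dev"
      role    = "api"
    }
  }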

  # hcloud_server_network.control_plane1 will be created
  + resource "hcloud_server_network" "control_plane1" {
      + id          = (known after apply)
      + ip          = (known after apply)
      + mac_address = (known after apply)
      + server_id   = (known after apply)
      + subnet_id   = (known after apply)
    }

  # hcloud_server_network.control_plane2 will be created
  + resource "hcloud_server_network" "control_plane2" {
      + id          = (known after apply)
      + ip          = (known after apply)
      + mac_address = (known after apply)
      + server_id   = (known after apply)
      + subnet_id   = (known after apply)
    }

  # hcloud_server_network.control_plane3 will be created
  + resource "hcloud_server_network" "control_plane3" {
      + id          = (known after apply)
      + ip          = (known after apply)
      + mac_address = (known after apply)
      + server_id   = (known after apply)
      + subnet_id   = (known after apply)
    }

  # hcloud_ssh_key.cedi_devpi will be created
  + resource "hcloud_ssh_key" "cedi_devpi" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "cedi@devpi"
      + public_key  = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFd4lpqMI7I9fPboMNhGzVrel0cir3D7bHLHADqE1Kmf"
    }

  # hcloud_ssh_key.cedi_ivy will be created
  + resource "hcloud_ssh_key" "cedi_ivy" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "cedi@ivy"
      + public_key  = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKKQwlVWSGICyOiVryEdEp8bR+ltCxSeikxPTRRgSssL"
    }

  # hcloud_ssh_key.cedi_mae will be created
  + resource "hcloud_ssh_key" "cedi_mae" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "cedi@mae"
      + public_key  = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOO9DMiwRjCCWvMA9TKYxRApgQx3g+owxkq9jy1YyjGN cedi@mae"
    }

  # hcloud_ssh_key.ghaction will be created
  + resource "hcloud_ssh_key" "ghaction" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "ghaction"
      + public_key  = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKoYpUOVAPNNLTi2sq8pouG2QTrdOccPBKYbPXfdByMz"
    }

Plan: 25 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + kubeone_hosts = {
      + control_plane = {
          + cloud_provider       = "hetzner"
          + cluster_name         = "cedi-dev"
          + network_id           = (known after apply)
          + private_address      = [
              + (known after apply),
              + (known after apply),
              + (known after apply),
            ]
          + public_address       = [
              + (known after apply),
              + (known after apply),
              + (known after apply),
            ]
          + ssh_agent_socket     = ""
          + ssh_port             = 22
          + ssh_private_key_file = "/home/runner/.ssh/id_ed25519"
          + ssh_user             = "root"
        }
    }
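
This output matches the host map that KubeOne's Terraform integration consumes. A sketch of the output block that could produce it, with the server and network references assumed:

  # Hypothetical sketch; the references are assumptions based on the plan.
  output "kubeone_hosts" {
    value = {
      control_plane = {
        cloud_provider       = "hetzner"
        cluster_name         = "cedi-dev"
        network_id           = hcloud_network.net.id
        private_address      = [
          hcloud_server_network.control_plane1.ip,
          hcloud_server_network.control_plane2.ip,
          hcloud_server_network.control_plane3.ip,
        ]
        public_address       = [
          hcloud_server.control_plane1.ipv4_address,
          hcloud_server.control_plane2.ipv4_address,
          hcloud_server.control_plane3.ipv4_address,
        ]
        ssh_agent_socket     = ""
        ssh_port             = 22
        ssh_private_key_file = "/home/runner/.ssh/id_ed25519"
        ssh_user             = "root"
      }
    }
  }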

Author: @renovate[bot]
Action: pull_request
